Circadian rhythm disruption is a fundamental symptom in patients with Alzheimer's disease (AD). The complete circadian orchestration of gene expression in the human brain and its inherent association with AD remain largely unknown. We propose a novel comprehensive approach, Prime, to detect and analyze rhythmic oscillation patterns in untimed, high-dimensional gene expression data across multiple datasets. To demonstrate the utility of Prime, we first validate it on a time-course expression dataset from mouse liver as a cross-species and cross-organ validation. We then apply it to study oscillation patterns in untimed genome-wide gene expression from 19 human brain regions of controls and AD patients. Our findings reveal clear, synchronized oscillation patterns in 15 pairs of brain regions of controls, while these oscillation patterns either disappear or are dimmed in AD patients. Notably, Prime discovers the circadian rhythmic patterns without requiring the timestamps of samples. The code for Prime, along with the code for reproducing the figures in this paper, is available at https://github.com/xinxingwu-uk/prime.
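The core difficulty here is recovering a periodic signal when sample collection times are unknown. Below is a minimal, generic sketch of one common strategy for this setting (circular ordering of samples via PCA followed by per-gene cosine fitting); it only illustrates the problem and is not Prime's actual algorithm, and the function name is hypothetical.

```python
# Generic sketch: order untimed samples by their angle in 2-D PCA space,
# then fit a cosine to each gene. NOT Prime's actual algorithm.
import numpy as np

def detect_oscillations(X):
    """X: (n_samples, n_genes) expression matrix with no timestamps."""
    Xc = X - X.mean(axis=0)
    # Project samples onto the first two principal components.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc = Xc @ Vt[:2].T                      # (n_samples, 2)
    # Assign each sample a pseudo-phase from its angle in PC space.
    phase = np.arctan2(pc[:, 1], pc[:, 0])  # in [-pi, pi]
    # Fit a*cos(phase) + b*sin(phase) + c per gene by least squares.
    A = np.column_stack([np.cos(phase), np.sin(phase), np.ones_like(phase)])
    coef, *_ = np.linalg.lstsq(A, Xc, rcond=None)
    amplitude = np.hypot(coef[0], coef[1])  # oscillation strength per gene
    return phase, amplitude

# Genes with large fitted amplitude are candidate rhythmic genes.
```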
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets of CFPs in the ophthalmology community, large-scale datasets for screening only have labels of disease categories, and datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition devices. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis as part of an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, as well as fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, several methods with strong generalization ability emerged in the challenge, making the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities contain prominent biomarkers that indicate suspected glaucoma. Clinically, taking both screenings is often recommended for a more accurate and reliable diagnosis. However, although numerous computer-aided diagnosis algorithms have been proposed based on fundus images or OCT volumes, there are still few methods leveraging both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework is established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and finally, the top-10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
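As a point of reference for the primary task, a typical baseline for joint fundus & OCT grading is a dual-branch network with late fusion. The PyTorch sketch below is a minimal illustrative baseline under assumed layer sizes and input shapes; it is not any team's submitted solution.

```python
# Minimal dual-branch late-fusion baseline for fundus + OCT glaucoma grading.
# Layer sizes are illustrative assumptions, not a submitted GAMMA solution.
import torch
import torch.nn as nn

class DualModalityGrader(nn.Module):
    def __init__(self, num_grades=3):
        super().__init__()
        # 2-D branch for the color fundus photograph.
        self.fundus_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 3-D branch for the OCT volume (1 channel, D x H x W).
        self.oct_branch = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Late fusion by concatenating the two feature vectors.
        self.classifier = nn.Linear(32 + 32, num_grades)

    def forward(self, fundus, oct_volume):
        f = self.fundus_branch(fundus)      # (B, 32)
        o = self.oct_branch(oct_volume)     # (B, 32)
        return self.classifier(torch.cat([f, o], dim=1))

# logits = DualModalityGrader()(torch.rand(2, 3, 256, 256),
#                               torch.rand(2, 1, 64, 128, 128))
```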
Graph neural networks have been used in various learning tasks, such as link prediction, node classification, and node clustering. Among them, link prediction is a relatively under-studied graph learning task whose current state-of-the-art models are based on shallow graph auto-encoder (GAE) architectures of one or two layers. In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs, and on creating effective methods to deepen (variational) GAE architectures to achieve stable and competitive performance. Our proposed methods innovatively incorporate standard auto-encoders (AEs) into the architectures of GAEs: standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating adjacency information and node features, while GAEs further build multi-scaled low-dimensional representations via residual connections to learn a compact overall embedding for link prediction. Empirically, extensive experiments on various benchmark datasets verify the effectiveness of our methods and demonstrate the competitive performance of our deepened graph models for link prediction. Theoretically, we prove that our deep extensions inclusively contain polynomial filters of different orders.
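A minimal PyTorch sketch of the general recipe described above, under illustrative assumptions about dimensions: a standard AE encoder compresses the adjacency-augmented node features, a stack of graph convolutions with residual connections deepens the encoder, and an inner-product decoder scores links. It is a sketch of the idea, not the paper's exact architecture.

```python
# Sketch of a deepened GAE with residual connections and an AE that embeds
# adjacency-augmented node features; illustrative, not the exact model.
import torch
import torch.nn as nn

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A_hat = A + torch.eye(A.size(0))
    D_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

class DeepResidualGAE(nn.Module):
    def __init__(self, in_dim, hid_dim, depth=4):
        super().__init__()
        # Standard AE encoder: compress [adjacency | features] per node.
        self.ae_encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # A stack of graph-convolution weight matrices.
        self.gcn_weights = nn.ModuleList(
            nn.Linear(hid_dim, hid_dim) for _ in range(depth))

    def forward(self, A, X):
        A_norm = normalize_adj(A)
        # Integrate adjacency information and node features.
        h = self.ae_encoder(torch.cat([A, X], dim=1))
        for gcn in self.gcn_weights:
            # Residual connection stabilizes the deepened GAE.
            h = h + torch.relu(gcn(A_norm @ h))
        # Inner-product decoder for link prediction.
        return torch.sigmoid(h @ h.t())

# A: (N, N) adjacency, X: (N, F) node features; in_dim = N + F.
```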
Feature selection reduces the dimensionality of data by identifying a subset of the most informative features. In this paper, we propose an innovative framework for unsupervised feature selection, called fractal autoencoders (FAE). It trains a neural network to pinpoint informative features, exploring representability globally and excavating diversity locally. Architecturally, FAE extends autoencoders by adding a one-to-one scoring layer and a small sub-neural network for feature selection in an unsupervised fashion. With such a concise architecture, FAE achieves state-of-the-art performance; extensive experimental results on fourteen datasets, including very high-dimensional data, demonstrate the superiority of FAE over existing contemporary methods for unsupervised feature selection. In particular, FAE offers substantial advantages for gene expression data exploration, reducing measurement cost by about 15% over the widely used L1000 landmark genes. Further, we show that the FAE framework is easily extensible with applications.
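The one-to-one scoring layer can be sketched compactly: one learnable weight per input feature gates the autoencoder's input, and a second reconstruction path uses only the top-scored features. The sketch below is in the spirit of FAE; details such as the hard top-k masking are illustrative assumptions.

```python
# Sketch of an autoencoder with a one-to-one feature scoring layer for
# unsupervised feature selection; top-k masking is an illustrative choice.
import torch
import torch.nn as nn

class ScoringAutoencoder(nn.Module):
    def __init__(self, n_features, hid_dim=64):
        super().__init__()
        # One-to-one scoring layer: one learnable weight per input feature.
        self.scores = nn.Parameter(torch.ones(n_features))
        self.encoder = nn.Sequential(nn.Linear(n_features, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, n_features)

    def forward(self, x, k=50):
        w = torch.sigmoid(self.scores)
        # Global path: reconstruct from all score-weighted features.
        recon_all = self.decoder(self.encoder(x * w))
        # Local path: reconstruct from only the k highest-scored features.
        mask = torch.zeros_like(w)
        mask[torch.topk(w, k).indices] = 1.0
        recon_topk = self.decoder(self.encoder(x * w * mask))
        return recon_all, recon_topk

# Train with MSE on both reconstructions; report the top-k scored features.
```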
Feature selection, as an important dimensionality reduction technique, reduces the data dimension by identifying an essential subset of input features, which can facilitate interpretable insights into the learning and inference processes. Algorithmic stability is a key characteristic of an algorithm regarding its sensitivity to perturbations of input samples. In this paper, we propose an innovative unsupervised feature selection algorithm that attains this stability with provable guarantees. The architecture of our algorithm consists of a feature scorer and a feature selector. The scorer trains a neural network (NN) to globally score all the features, and the selector adopts a dependent sub-NN to locally evaluate the representation abilities of the selected features. Further, we present an algorithmic stability analysis and show that our algorithm has a performance guarantee via a generalization error bound. Extensive experimental results on real-world datasets demonstrate the superior generalization performance of our proposed algorithm over strong baseline methods. Also, the properties revealed by our theoretical analysis and the stability of the features selected by our algorithm are empirically confirmed.
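One way to check such selection stability empirically, complementary to the paper's theoretical bound, is to rerun the selector on perturbed subsamples and measure the overlap of the selected subsets. The sketch below uses the average pairwise Jaccard index, a standard but generic choice; it is not the paper's analysis.

```python
# Generic empirical stability check for a feature selector: average pairwise
# Jaccard overlap of subsets selected from perturbed subsamples.
import numpy as np

def selection_stability(X, select_fn, k=50, n_runs=10, frac=0.8, seed=0):
    """select_fn(X_sub, k) -> indices of the k selected features."""
    rng = np.random.default_rng(seed)
    subsets = []
    for _ in range(n_runs):
        # Perturb the input by subsampling a fraction of the rows.
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        subsets.append(set(select_fn(X[idx], k)))
    overlaps = [len(a & b) / len(a | b)
                for i, a in enumerate(subsets) for b in subsets[i + 1:]]
    return float(np.mean(overlaps))  # 1.0 means perfectly stable selection
```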
This work focuses on the complicated pathological features in the task of joint retinal edema lesion segmentation from OCT images, such as blurred boundaries, severe scale differences between symptoms, and background noise interference, while making the segmentation results more reliable. In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network, which can provide accurate segmentation results with reliability assessment. Specifically, to improve the model's ability to learn the complex pathological features of retinal edema lesions in OCT images, we develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and our newly designed multi-scale transformer module. Meanwhile, to make the segmentation results more reliable, a novel uncertainty segmentation head based on subjective logic evidential theory is introduced to generate the final segmentation results with a corresponding overall uncertainty evaluation score map. We conduct comprehensive experiments on the public AI-Challenge 2018 database for retinal edema lesion segmentation, and the results show that our proposed method achieves better segmentation accuracy with a high degree of reliability compared to other state-of-the-art segmentation approaches. The code will be released at: https://github.com/LooKing9218/ReliableRESeg.
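The evidential uncertainty head admits a compact generic formulation: per-pixel non-negative evidence parameterizes a Dirichlet distribution, whose expected probabilities give the segmentation and whose total strength gives an uncertainty map. The PyTorch sketch below is that generic formulation, not the paper's exact head.

```python
# Generic evidential (subjective-logic) uncertainty head for segmentation:
# per-pixel Dirichlet evidence yields probabilities and an uncertainty map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialSegHead(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats):
        # Non-negative evidence per class and pixel.
        evidence = F.softplus(self.conv(feats))
        alpha = evidence + 1.0                # Dirichlet parameters
        S = alpha.sum(dim=1, keepdim=True)    # Dirichlet strength
        prob = alpha / S                      # expected class probability
        K = alpha.size(1)
        uncertainty = K / S                   # in (0, 1]; high = unreliable
        return prob, uncertainty

# prob, u = EvidentialSegHead(64, 4)(torch.rand(2, 64, 128, 128))
```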
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose the Fourier-Net, replacing the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of our Fourier-Net learning to output a full-resolution displacement field in the spatial domain, we learn its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero padding layer and an inverse discrete Fourier transform layer) to the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22% of its parameters and 6.66% of the mult-adds, achieves a 0.6% higher Dice score and an 11.48× faster inference speed. Code is available at https://github.com/xi-jia/Fourier-Net.
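The model-driven decoder described above is simple enough to write down directly: zero-pad the band-limited Fourier block into a full-size spectrum and apply an inverse DFT. The sketch below follows that description, with the centering and shape conventions as illustrative assumptions.

```python
# Sketch of a parameter-free Fourier decoder: zero-pad a band-limited,
# low-resolution spectrum and invert it to a full-resolution displacement
# field. Centering/shape conventions are illustrative assumptions.
import torch

def fourier_decode(coeffs_low, full_shape):
    """coeffs_low: complex low-frequency block, e.g. (B, 3, d, h, w);
    full_shape: target spatial shape, e.g. (D, H, W)."""
    B, C = coeffs_low.shape[:2]
    padded = torch.zeros(B, C, *full_shape, dtype=coeffs_low.dtype)
    d, h, w = coeffs_low.shape[2:]
    D, H, W = full_shape
    # Place the band-limited block at the center of the full spectrum.
    padded[:, :, (D - d) // 2:(D + d) // 2,
                 (H - h) // 2:(H + h) // 2,
                 (W - w) // 2:(W + w) // 2] = coeffs_low
    # Shift the zero frequency back to the corner, then invert.
    spectrum = torch.fft.ifftshift(padded, dim=(-3, -2, -1))
    displacement = torch.fft.ifftn(spectrum, dim=(-3, -2, -1)).real
    return displacement  # (B, 3, D, H, W) dense displacement field

# low = torch.randn(1, 3, 8, 8, 8, dtype=torch.complex64)
# flow = fourier_decode(low, (64, 64, 64))
```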
Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends highly on the context formed by their surrounding areas. In addition, the required precision is usually higher than in segmentation and object detection tasks. Therefore, localization has its own challenges, distinct from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is utilized to learn contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate multi-scale features, consisting of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
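The "zoom-in" strategy can be illustrated independently of ZIAN's specific modules: a coarse network proposes a location from a downsampled image, and a fine network refines it within a cropped ROI. The sketch below is such a generic coarse-to-fine localizer, with placeholder heatmap networks and hypothetical names.

```python
# Generic coarse-to-fine ("zoom-in") landmark localization: coarse heatmap
# argmax -> crop ROI -> fine heatmap argmax. Nets are placeholders.
import torch
import torch.nn.functional as F

def zoom_in_localize(image, coarse_net, fine_net, crop=128):
    """image: (1, C, H, W); each net returns a (1, 1, h, w) heatmap."""
    _, _, H, W = image.shape
    heat = coarse_net(F.interpolate(image, size=(256, 256)))
    # Coarse estimate: heatmap argmax, rescaled to image coordinates.
    idx = heat.flatten().argmax()
    cy = int(idx // heat.size(-1) * H / heat.size(-2))
    cx = int(idx % heat.size(-1) * W / heat.size(-1))
    # Zoom in: crop an ROI centered on the coarse estimate.
    y0 = max(0, min(cy - crop // 2, H - crop))
    x0 = max(0, min(cx - crop // 2, W - crop))
    roi = image[:, :, y0:y0 + crop, x0:x0 + crop]
    fine_heat = fine_net(roi)
    fi = fine_heat.flatten().argmax()
    fy = int(fi // fine_heat.size(-1) * crop / fine_heat.size(-2))
    fx = int(fi % fine_heat.size(-1) * crop / fine_heat.size(-1))
    return y0 + fy, x0 + fx  # refined landmark in full-image coordinates
```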
Gait-phase-based control is a popular research topic for walking-aid robots, especially robotic lower-limb prostheses. Gait phase estimation is a challenge for gait-phase-based control. Previous studies used the integration or differentiation of the human thigh angle to estimate the gait phase, but accumulated measurement errors and noise can affect the estimation results. In this paper, a more robust gait phase estimation method is proposed, using a unified form of piecewise monotonic gait phase-thigh angle models for various locomotion modes. The gait phase is estimated from only the thigh angle, which is a stable variable and avoids phase drifting. A Kalman-filter-based smoother is designed to further suppress mutations in the estimated gait phase. Based on the proposed gait phase estimation method, a gait-phase-based joint angle tracking controller is designed for a transfemoral prosthesis. The proposed gait phase estimation method, gait phase smoother, and controller are evaluated through offline analysis of walking data in various locomotion modes. The real-time performance of the gait-phase-based controller is validated in experiments on the transfemoral prosthesis.
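The two key ingredients, an invertible (piecewise monotonic) phase-angle model and a Kalman-style smoother, can be sketched generically as follows; the segment breakpoints and noise parameters are illustrative assumptions, not the paper's identified model.

```python
# Generic sketch: read gait phase off a piecewise monotonic thigh-angle
# model, then smooth it with a scalar constant-rate Kalman filter.
import numpy as np

def phase_from_thigh_angle(theta, theta_max=30.0, theta_min=-15.0, rising=True):
    """Map thigh angle (deg) to gait phase in [0, 1) on one monotonic segment."""
    frac = np.clip((theta - theta_min) / (theta_max - theta_min), 0.0, 1.0)
    # Rising segment covers phase [0, 0.5), falling segment covers [0.5, 1).
    return 0.5 * frac if rising else 0.5 + 0.5 * (1.0 - frac)

class PhaseKalmanSmoother:
    """Scalar Kalman filter assuming a locally constant phase rate."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def update(self, measured_phase, rate, dt):
        self.x += rate * dt               # predict with nominal phase rate
        self.p += self.q
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (measured_phase - self.x)
        self.p *= (1.0 - k)
        return self.x % 1.0               # wrap phase to [0, 1)
```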